21 research outputs found

    Joint attention and perceptual experience

    Get PDF
    Joint attention customarily refers to the coordinated focus of attention between two or more individuals on a common object or event, where it is mutually “open” to all attenders that they are so engaged. We identify two broad approaches to the analysis of joint attention: one in terms of cognitive notions such as common knowledge and common awareness, and one on which joint attention is fundamentally a primitive phenomenon of sensory experience. John Campbell’s relational theory is a prominent representative of the latter approach and the main focus of this paper. We argue that Campbell’s theory is problematic for a variety of reasons, through which runs a common thread: most of the problems the theory faces arise from the relational view of perception that Campbell endorses, and, more generally, they suggest that perceptual experience is not sufficient for an analysis of joint attention.

    Are non-human primates Gricean? Intentional communication in language evolution

    Get PDF
    The field of language evolution has recently made Gricean pragmatics central to its task, particularly within comparative studies of human and non-human primate communication. The standard model of Gricean communication requires a set of complex cognitive abilities, such as belief attribution and understanding nested higher-order mental states. On this model, non-human primate communication is of a radically different kind from ours. Moreover, the cognitive demands of the standard view are too high even for human infants, who nevertheless do engage in communication. In this paper I critically assess the standard view and contrast it with an alternative, minimal model of Gricean communication recently advanced by Richard Moore. I then raise two objections to the minimal model. The upshot is that this model is conceptually unstable and fails to provide a suitable middle ground between full-fledged human communication and simpler forms of non-human animal communication.

    The nature of joint attention: perception and other minds

    Get PDF

    Coordinating attention requires coordinated senses

    Get PDF
    From playing basketball to ordering at a food counter, we frequently and effortlessly coordinate our attention with others towards a common focus: we look at the ball, or point at a piece of cake. This non-verbal coordination of attention plays a fundamental role in our social lives: it ensures that we refer to the same object, develop a shared language, understand each other’s mental states, and coordinate our actions. Models of joint attention generally attribute this accomplishment to gaze coordination. But are visual attentional mechanisms sufficient to achieve joint attention in all cases? Beyond cases where visual information is missing, we show how combining vision with other senses can be helpful, and even necessary, for certain uses of joint attention. We explain the two ways in which non-visual cues contribute to joint attention: either as enhancers, when they complement gaze and pointing gestures in order to coordinate joint attention on visible objects, or as modality pointers, when joint attention needs to be shifted away from the whole object to one of its properties, say weight or texture. This multisensory approach to joint attention has important implications for social robotics, clinical diagnostics, pedagogy, and theoretical debates on the construction of a shared world.

    The impact of joint attention on the sound-induced flash illusions

    Get PDF
    Humans coordinate their focus of attention with others, either by gaze following or by prior agreement. Though the effects of joint attention on perceptual and cognitive processing tend to be examined in purely visual environments, they should also show up in multisensory settings. According to a prevalent hypothesis, joint attention enhances visual information encoding and processing, over and above individual attention. If two individuals jointly attend to the visual components of an audiovisual event, this should affect the weighting of visual information during multisensory integration. We tested this prediction in this preregistered study, using the well-documented sound-induced flash illusions, where the integration of an incongruent number of visual flashes and auditory beeps results in a single flash being seen as two (fission illusion) and two flashes being seen as one (fusion illusion). Participants were asked to count flashes either alone or together, and were expected to be less prone to both fission and fusion illusions when they jointly attended to the visual targets. However, illusions were as frequent when people attended to the flashes alone as with someone else, even though they responded faster during joint attention. Our results reveal the limitations of the theory that joint attention enhances visual processing, as it does not affect temporal audiovisual integration.
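
    To make the design concrete, the sketch below shows how the two illusion conditions described above translate into a simple per-condition rate computation. It is not the authors' preregistered analysis; the column names, the CSV file, and the response coding are assumptions for illustration only.

        # Minimal sketch (not the study's analysis code): fission/fusion illusion
        # rates per attention condition from hypothetical trial-level data.
        import pandas as pd

        def illusion_rates(trials: pd.DataFrame) -> pd.DataFrame:
            """Expected (assumed) columns:
            condition -- 'alone' or 'joint'
            n_flashes -- flashes presented (1 or 2)
            n_beeps   -- beeps presented (1 or 2)
            reported  -- number of flashes the participant reported
            """
            # Fission trials: one flash paired with two beeps; illusion = reporting two.
            fission = trials[(trials.n_flashes == 1) & (trials.n_beeps == 2)]
            # Fusion trials: two flashes paired with one beep; illusion = reporting one.
            fusion = trials[(trials.n_flashes == 2) & (trials.n_beeps == 1)]
            return pd.DataFrame({
                "fission_rate": fission.groupby("condition")["reported"].apply(lambda r: (r == 2).mean()),
                "fusion_rate": fusion.groupby("condition")["reported"].apply(lambda r: (r == 1).mean()),
            })

        # Usage with a hypothetical file:
        # print(illusion_rates(pd.read_csv("flash_trials.csv")))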

    Cognitive penetration and implicit cognition

    Get PDF
    Cognitive states, such as beliefs, desires and intentions, may influence how we perceive people and objects. If this is the case, are those influences worse when they occur implicitly rather than explicitly? Here we show that cognitive penetration of perception generally involves an implicit component. First, the process of influence is implicit, making us unaware that our perception is misrepresenting the world. This lack of awareness is the source of the epistemic threat raised by cognitive penetration. Second, the influencing state can be implicit, though it can also be or become explicit. Being unaware of the content of the influencing state, we argue, does not make as much difference to the epistemic threat as it does to the epistemic responsibility of the agent. Implicit influencers cannot be examined for their accuracy and justification, and cannot be voluntarily accepted by the perceiver. Conscious awareness, however, is not sufficient for attributing blame to the agent. An equally important condition is the degree of control that the agent can exercise to change the contents that influence perception or to stop their influence. We suggest that such control can also result from social influence, and that the cognitive penetrability of perception is therefore also a social issue.

    A Systems-Level Study Reveals Regulators of Membrane-less Organelles in Human Cells

    Get PDF
    Membrane-less organelles (MLOs) are liquid-like subcellular compartments that form through phase separation of proteins and RNA. While their biophysical properties are increasingly well understood, their regulation and the consequences of perturbed MLO states for cell physiology are less clear. To study these regulatory networks, we targeted 1,354 human genes and screened for morphological changes of nucleoli, Cajal bodies, splicing speckles, PML nuclear bodies (PML-NBs), cytoplasmic processing bodies, and stress granules. By multivariate analysis of MLO features, we identified hundreds of genes that control MLO homeostasis. We discovered regulatory crosstalk between MLOs and mapped hierarchical interactions between aberrant MLO states and cellular properties. We provide evidence that perturbation of pre-mRNA splicing results in stress granule formation, and we reveal that PML-NB abundance influences DNA replication rates and that PML-NBs are in turn controlled by HIP kinases. Together, our comprehensive dataset is an unprecedented resource for deciphering the regulation and biological functions of MLOs.
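
    As an illustration of the kind of multivariate hit calling such a screen involves, the sketch below z-scores per-gene morphology features against non-targeting controls and flags strongly perturbed genes. It is not the study's pipeline; the feature names, the 'control' label, and the cutoff are hypothetical.

        # Minimal sketch, assuming one row per imaged well with a 'gene' column
        # ('control' marks non-targeting wells) and numeric morphology features.
        import pandas as pd

        FEATURES = ["nucleolus_count", "nucleolus_area", "pml_nb_count", "stress_granule_count"]

        def flag_mlo_regulators(wells: pd.DataFrame, z_cutoff: float = 3.0) -> pd.DataFrame:
            """Return genes whose mean feature profile deviates strongly from controls."""
            per_gene = wells.groupby("gene")[FEATURES].mean()          # average over replicate wells
            control_mean = per_gene.loc["control"]
            control_sd = wells.loc[wells.gene == "control", FEATURES].std(ddof=1)
            z = (per_gene - control_mean) / control_sd                 # per-feature z-scores
            hits = per_gene[(z.abs() > z_cutoff).any(axis=1)]          # any feature strongly perturbed
            return hits.join(z, rsuffix="_z").drop(index="control", errors="ignore")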

    Passive Noise Filtering by Cellular Compartmentalization

    Full text link

    Computer vision for image-based transcriptomics

    Get PDF
    Single-cell transcriptomics has recently emerged as one of the most promising tools for understanding the diversity of the transcriptome among single cells. Image-based transcriptomics is unique compared to other methods in that it does not require conversion of RNA to cDNA prior to signal amplification and transcript quantification; its efficiency in transcript detection is therefore unmatched. In addition, image-based transcriptomics allows the study of the spatial organization of the transcriptome in single cells at single-molecule resolution and, when combined with superresolution microscopy, at nanometer resolution. However, in order to unlock the full power of image-based transcriptomics, robust computer vision of single molecules and cells is required. Here, we briefly discuss the setup of the experimental pipeline for image-based transcriptomics, and then describe in detail the algorithms that we developed to extract, at high throughput, robust multivariate feature sets of transcript molecule abundance, localization and patterning in tens of thousands of single cells across the transcriptome. These computer vision algorithms and pipelines can be downloaded from: https://github.com/pelkmanslab/ImageBasedTranscriptomics
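
    To give a flavour of the kind of computer vision step such a pipeline automates, the sketch below detects diffraction-limited transcript spots with a Laplacian-of-Gaussian filter and assigns them to segmented cells. It is not the pelkmanslab code; the scikit-image approach, the parameter values, and the assumption of a pre-computed cell label image are illustrative only.

        # Minimal sketch of single-molecule spot detection and per-cell counting.
        # Parameters depend on pixel size and signal-to-noise; values here are guesses.
        import numpy as np
        from skimage import io, feature

        def detect_transcript_spots(image_path: str, threshold: float = 0.05) -> np.ndarray:
            """Return (row, col) coordinates of candidate FISH spots."""
            img = io.imread(image_path).astype(float)
            img = (img - img.min()) / (np.ptp(img) + 1e-9)           # normalise to [0, 1]
            # blob_log returns one (row, col, sigma) triple per detected blob.
            blobs = feature.blob_log(img, min_sigma=1, max_sigma=3, threshold=threshold)
            return blobs[:, :2]

        def count_spots_per_cell(spots: np.ndarray, labels: np.ndarray) -> dict:
            """Count spots per segmented cell, given a label image (0 = background)."""
            counts = {}
            for r, c in spots.astype(int):
                cell = int(labels[r, c])
                if cell:
                    counts[cell] = counts.get(cell, 0) + 1
            return counts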